Microsoft researcher describes two new deepfake methods and their risks
Eric Joel Horvitz is a computer scientist and director of the Microsoft Research Lab in Redmond. In a new research paper, "On the Horizon: Interactive and Compositional Deepfakes," Horvitz describes two new deepfake methods that he believes are technically possible in the future and "that we can expect to come into practice with costly implications for society." "Interactive deepfakes" is Horvitz's term for multimodal deepfake clones of real people that are indistinguishable from the real person, for example during live video calls. Current deepfake systems are mostly limited to swapping faces, and even that offers only limited possibilities for interaction.
RigNeRF: A New Deepfakes Method That Uses Neural Radiance Fields
What you're seeing in the image above (middle image, man in blue shirt), as well as in the image directly below (left image, man in blue shirt), is not a 'real' video into which a small patch of 'fake' face has been superimposed, but an entirely synthesized scene that exists solely as a volumetric neural rendering, including the body and background.

In the example directly above, the real-life video on the right (woman in red dress) is used to 'puppet' the captured identity (man in blue shirt) on the left via RigNeRF, which the authors claim is the first NeRF-based system to achieve separation of pose and expression while also being able to perform novel view synthesis. The male figure on the left in the image above was 'captured' from a 70-second smartphone video, and the input data (including the entire scene information) was subsequently trained across 4 V100 GPUs to obtain the scene.

Since 3DMM-style parametric rigs are also available as whole-body parametric CGI proxies (rather than just face rigs), RigNeRF potentially opens up the possibility of full-body deepfakes, where real human movement, texture and expression are passed to the CGI-based parametric layer, which would then translate action and expression into rendered NeRF environments and videos.

As for RigNeRF: does it qualify as a deepfake method in the sense that current headlines understand the term? Or is it just another semi-hobbled also-ran to DeepFaceLab and other labor-intensive, 2017-era autoencoder deepfake systems?
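The core idea described above, a neural radiance field whose output is conditioned on 3DMM-style expression and head-pose codes, can be sketched in miniature. The toy below is an illustration of that conditioning-and-volume-rendering pattern, not RigNeRF's actual architecture: the network sizes, code dimensions, and untrained random weights are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a 3D sample point plus 3DMM-style
# expression and head-pose codes (sizes are assumptions).
POS_DIM, EXPR_DIM, POSE_DIM, HIDDEN = 3, 10, 6, 32

# A tiny untrained MLP standing in for the radiance field.
W1 = rng.normal(0, 0.1, (POS_DIM + EXPR_DIM + POSE_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, 4))  # outputs: RGB + density

def radiance_field(x, expr, pose):
    """Conditioned field: (point, expression, pose) -> (rgb, sigma)."""
    h = np.tanh(np.concatenate([x, expr, pose]) @ W1)
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))  # colors squashed to [0, 1]
    sigma = np.log1p(np.exp(out[3]))      # non-negative density
    return rgb, sigma

def render_ray(origin, direction, expr, pose, n_samples=32, far=4.0):
    """Standard volume rendering: accumulate color along one ray."""
    ts = np.linspace(0.0, far, n_samples)
    dt = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        rgb, sigma = radiance_field(origin + t * direction, expr, pose)
        alpha = 1.0 - np.exp(-sigma * dt)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

# Changing only the expression code re-renders the same scene
# differently: the 'puppeting' mechanism in miniature.
origin, direction = np.zeros(3), np.array([0.0, 0.0, 1.0])
neutral = render_ray(origin, direction, np.zeros(EXPR_DIM), np.zeros(POSE_DIM))
smiling = render_ray(origin, direction, np.ones(EXPR_DIM), np.zeros(POSE_DIM))
print(neutral, smiling)
```

The key design point is that pose and expression enter the field as separate inputs, so one can be varied while the other is held fixed; in a trained system that separation is what lets a driving video steer a captured identity.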
New Deepfake Method Can Put Words In Anyone's Mouth
A woman looks at the camera and says, "Knowledge is one thing, virtue is another." Then she says, "Knowledge is virtue." The same person, with the same voice, says two conflicting statements, but she only said the first in real life. The second statement is the work of an AI system that took audio of her speech and turned it into a video. Researchers from Nanyang Technological University in Singapore, the National Laboratory of Pattern Recognition in China, and artificial intelligence software company SenseTime developed the method for creating deepfakes from audio sources.
Researchers Come Out With Yet Another Unnerving, New Deepfake Method
Deepfakes, ultrarealistic fake videos manipulated using machine learning, are getting pretty convincing. And researchers continue to develop new methods to create these types of videos, for better or, more likely, for worse. The most recent method comes from researchers at Carnegie Mellon University, who have figured out a way to automatically transfer the "style" of one person to another. "For instance, Barack Obama's style can be transformed into Donald Trump," the researchers wrote in the description of a YouTube video highlighting the outcome of this method. The video shows John Oliver's facial expressions transferred to both Stephen Colbert and an animated frog, Martin Luther King, Jr.'s to Obama, and Obama's to Trump.